98 research outputs found

    Segmentation of Signs for Research Purposes: Comparing Humans and Machines

    Wordnets have been a popular lexical resource type for many years. Their sense-based representation of lexical items and numerous relation structures have been used for a variety of computational and linguistic applications. The inclusion of different wordnets into multilingual wordnet networks has further extended their use into the realm of cross-lingual research. Wordnets have been released for many spoken languages. Research has also been carried out into the creation of wordnets for several sign languages, but none have yet resulted in publicly available datasets. This article presents our efforts towards the inclusion of sign languages in a multilingual wordnet, starting with Greek Sign Language (GSL) and German Sign Language (DGS). Based on differences in available language resources between GSL and DGS, we trial two workflows with different coverage priorities. We also explore how synergies between the two workflows can be leveraged and how future work on additional sign languages could profit from building on existing sign language wordnet data. The results of our work are made publicly available.
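    To make the cross-lingual linking idea concrete, here is a minimal Python sketch of how sign language entries might attach to shared multilingual synsets via an interlingual index. The class and field names, the synset identifier, the glosses, the URLs, and the language codes are all illustrative assumptions, not the authors' actual data model.

        from dataclasses import dataclass, field

        @dataclass
        class SignEntry:
            gloss: str       # ID gloss from a sign language corpus
            video_url: str   # citation-form video for the sign
            language: str    # e.g. "gss" (GSL) or "gsg" (DGS)

        @dataclass
        class Synset:
            ili_id: str      # interlingual index key shared across languages
            definition: str
            members: dict = field(default_factory=dict)  # language -> [entries]

            def add_sign(self, entry: SignEntry) -> None:
                """Attach a sign language entry to this concept."""
                self.members.setdefault(entry.language, []).append(entry)

        # Usage: one concept, with one DGS sign and one GSL sign attached to it.
        house = Synset("i35545", "a building serving as living quarters")
        house.add_sign(SignEntry("HAUS1", "https://example.org/haus1.mp4", "gsg"))
        house.add_sign(SignEntry("SPITI1", "https://example.org/spiti1.mp4", "gss"))
        print([e.gloss for e in house.members["gsg"]])  # ['HAUS1']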

    The impact of text segmentation on subtitle reading

    Understanding the way people watch subtitled films has become a central concern for subtitling researchers in recent years. Both subtitling scholars and professionals generally believe that, in order to reduce cognitive load and enhance readability, line breaks in two-line subtitles should follow syntactic units. However, previous research has been inconclusive as to whether syntax-based segmentation facilitates comprehension and reduces cognitive load. In this study, we assessed the impact of text segmentation on subtitle processing among different groups of viewers: hearing people with different mother tongues (English, Polish, and Spanish) and deaf, hard of hearing, and hearing people with English as a first language. We measured three indicators of cognitive load (difficulty, effort, and frustration) as well as comprehension and eye-tracking variables. Participants watched two video excerpts with syntactically and non-syntactically segmented subtitles. The aim was to determine whether syntax-based text segmentation, as well as the viewers’ linguistic background, influences subtitle processing. Our findings show that non-syntactically segmented subtitles induced higher cognitive load, but they did not adversely affect comprehension. The results are discussed in the context of cognitive load, audiovisual translation, and deafness.
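    As a rough illustration of the contrast between the two conditions, the following Python sketch breaks a two-line subtitle either at the last space that fits the first line (syntax-blind) or at the phrase boundary nearest the midpoint. The 42-character line limit and the hand-annotated boundary offsets are demo assumptions, not the study's materials.

        def naive_break(text: str, max_chars: int = 42) -> tuple[str, str]:
            """Break at the last space that fits the first line (syntax-blind)."""
            cut = text.rfind(" ", 0, max_chars + 1)
            return text[:cut], text[cut + 1:]

        def syntactic_break(text: str, boundaries: list[int]) -> tuple[str, str]:
            """Break at the phrase boundary closest to the subtitle's midpoint."""
            mid = len(text) // 2
            cut = min(boundaries, key=lambda i: abs(i - mid))
            return text[:cut], text[cut + 1:]

        line = "He noticed the dog that was barking at the postman outside"
        # Syntax-blind: splits the determiner from its noun ("the / postman").
        print(naive_break(line))
        # Phrase boundaries at character offsets 18 and 35, hand-annotated here.
        print(syntactic_break(line, [18, 35]))  # keeps "at the postman" together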

    Machine Learning for Enhancing Dementia Screening in Ageing Deaf Signers of British Sign Language

    Real-time hand movement trajectory tracking based on machine learning approaches may assist the early identification of dementia in ageing deaf individuals who are users of British Sign Language (BSL), since there are few clinicians with appropriate communication skills and a shortage of sign language interpreters. In this paper, we introduce an automatic dementia screening system for ageing Deaf signers of BSL, using a Convolutional Neural Network (CNN) to analyse the sign space envelope and facial expression of BSL signers recorded in ordinary 2D videos from the BSL corpus. Our approach introduces a sub-network (the multi-modal feature extractor) comprising an accurate real-time hand trajectory tracking model and a real-time facial landmark motion analysis model. The experiments show the effectiveness of our deep-learning-based approach on sign space tracking, facial motion tracking, and early-stage dementia assessment tasks.
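    Below is a minimal PyTorch sketch of the general two-stream idea: one branch per modality (hand trajectories and facial landmarks), fused before a screening classifier. All layer sizes, input shapes, and the binary output are illustrative assumptions, not the paper's actual architecture.

        import torch
        import torch.nn as nn

        class MultiModalScreen(nn.Module):
            def __init__(self, traj_dim=2, face_dim=136, hidden=64, n_classes=2):
                super().__init__()
                # 1D convolutions over time, one branch per modality.
                self.traj_branch = nn.Sequential(
                    nn.Conv1d(traj_dim, hidden, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.face_branch = nn.Sequential(
                    nn.Conv1d(face_dim, hidden, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.classifier = nn.Linear(2 * hidden, n_classes)

            def forward(self, traj, face):
                # traj: (batch, traj_dim, frames), e.g. wrist x/y per video frame
                # face: (batch, face_dim, frames), e.g. 68 landmarks * (x, y)
                t = self.traj_branch(traj).squeeze(-1)
                f = self.face_branch(face).squeeze(-1)
                # Fuse the two modality embeddings before classification.
                return self.classifier(torch.cat([t, f], dim=1))

        model = MultiModalScreen()
        scores = model(torch.randn(4, 2, 300), torch.randn(4, 136, 300))
        print(scores.shape)  # torch.Size([4, 2])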

    Monitoring different phonological parameters of sign language engages the same cortical language network but distinctive perceptual ones

    The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine whether brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer reaction times (RTs) and stronger activations in an action observation network in all participants, and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.

    Stimulus rate increases lateralisation in linguistic and non-linguistic tasks measured by functional transcranial Doppler sonography

    Studies to date that have used fTCD to examine language lateralisation have predominantly used word or sentence generation tasks. Here we sought to further assess the sensitivity of fTCD to language lateralisation by using a metalinguistic task which does not involve novel speech generation: rhyme judgement in response to written words. Line array judgement was included as a non-linguistic visuospatial task to examine the relative strength of left and right hemisphere lateralisation within the same individuals when output requirements of the tasks are matched. These externally paced tasks allowed us to manipulate the number of stimuli presented to participants and thus assess the influence of pace on the strength of lateralisation. In Experiment 1, 28 right-handed adults participated in rhyme and line array judgement tasks and showed reliable left and right lateralisation at the group level for each task, respectively. In Experiment 2 we increased the pace of the tasks, presenting more stimuli per trial. We measured laterality indices (LIs) from 18 participants who performed both linguistic and non-linguistic judgement tasks during the original 'slow' presentation rate (5 judgements per trial) and a fast presentation rate (10 judgements per trial). The increase in pace led to increased strength of lateralisation in both the rhyme and line conditions. Our results demonstrate for the first time that fTCD is sensitive to the left-lateralised processes involved in metalinguistic judgements. Our data also suggest that changes in the strength of language lateralisation, as measured by fTCD, are not driven by articulatory demands alone. The current results suggest that at least one aspect of task difficulty, the pace of stimulus presentation, influences the strength of lateralisation during both linguistic and non-linguistic tasks.
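    For readers unfamiliar with fTCD laterality indices, the Python sketch below computes an LI in a commonly used style: average the left-minus-right blood flow velocity difference in a short window centred on its peak within the period of interest. The sampling rate, window length, and period of interest here are arbitrary demo values, not this study's parameters.

        import numpy as np

        def laterality_index(left, right, fs=25.0, poi=(5.0, 15.0), win=2.0):
            """LI > 0 indicates left-hemisphere dominance."""
            diff = np.asarray(left) - np.asarray(right)  # epoch-averaged % change
            i0, i1 = int(poi[0] * fs), int(poi[1] * fs)  # period of interest
            peak = i0 + np.argmax(np.abs(diff[i0:i1]))   # peak |difference| in POI
            half = int(win * fs / 2)
            return float(np.mean(diff[max(0, peak - half):peak + half]))

        # Toy epoch-averaged traces: 20 s at 25 Hz with a left-dominant response.
        t = np.arange(0, 20, 1 / 25.0)
        left = 2.0 * np.exp(-0.5 * ((t - 9) / 2.0) ** 2)   # % change, left MCA
        right = 1.2 * np.exp(-0.5 * ((t - 9) / 2.0) ** 2)  # % change, right MCA
        print(round(laterality_index(left, right), 2))     # positive -> left lateralised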